Kolmogorov width decay and poor approximators in machine learning: shallow neural networks, random feature models and neural tangent kernels

Authors

Abstract

We establish a scale separation of Kolmogorov width type between subspaces of a given Banach space, under the condition that a sequence of linear maps converges much faster on one of the subspaces. The general technique is then applied to show that reproducing kernel Hilbert spaces are poor $$L^{2}$$-approximators for the class of two-layer neural networks in high dimension, and that multi-layer networks with small path norm are poor approximators for certain Lipschitz functions, also in the $$L^{2}$$-topology.
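For readers less familiar with the notion, the scale separation in the abstract is phrased in terms of the Kolmogorov n-width. A standard textbook definition (stated here for orientation, not quoted from the paper) is:

```latex
% Kolmogorov n-width of a set K in a normed space X:
% the best worst-case approximation error achievable by
% any linear subspace V of dimension at most n.
\[
  d_n(K, X) \;=\; \inf_{\substack{V \subset X \\ \dim V \le n}}
  \;\sup_{f \in K}\; \inf_{g \in V} \, \| f - g \|_X .
\]
```

Slow decay of $$d_n$$ in $$n$$ means that no sequence of linear spaces of moderate dimension (such as those spanned by random features or kernel eigenfunctions) can approximate the class efficiently, which is the sense in which the abstract calls kernel spaces "poor approximators".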


Similar articles

Porosity classification from thin sections using image analysis and neural networks including shallow and deep learning in Jahrum formation

The porosity within a reservoir rock is a basic parameter for reservoir characterization. This paper introduces two intelligent models for identifying porosity types using image analysis. To this end, thirteen geometrical parameters of the pores in each image were first extracted using image analysis techniques. The extracted features and their corresponding pore types ...


Gyroscope Random Drift Modeling Using Neural Networks, Fuzzy-Neural and Traditional Time-Series Methods

In this paper, statistical and time-series models are used to determine the random drift of a Dynamically Tuned Gyroscope (DTG). This drift is compensated with an optimal predictive transfer function. Nonlinear neural-network and fuzzy-neural models are also investigated for prediction and compensation of the random drift. Finally, the different models are compared and their advantages a...


Reinforcement Learning in Neural Networks: A Survey

In recent years, research on reinforcement learning (RL) has focused on bridging the gap between adaptive optimal control and bio-inspired learning techniques. Neural network reinforcement learning (NNRL) is among the most popular algorithms in the RL framework. Using neural networks enables RL to search for optimal policies more efficiently in several real-life applicat...



Fuzzy Neural Networks as “Good” Function Approximators

The paper discusses the generalization capability of two-hidden-layer neural networks based on various fuzzy operators, introduced earlier by the authors as Fuzzy Flip-Flop based Neural Networks (FNNs), in comparison with standard tansig-based networks from the MATLAB Neural Network Toolbox, in the frame of a simple function approximation problem. Various fuzzy neurons, one of them based on new ...



Journal

Journal title: Research in the Mathematical Sciences

Year: 2021

ISSN: 2522-0144, 2197-9847

DOI: https://doi.org/10.1007/s40687-020-00233-4